Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
Domain adaptive object re-ID aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain in order to tackle open-class re-identification problems. Although state-of-the-art pseudo-label-based methods have achieved great success, they do not make full use of all valuable information because of the domain gap and unsatisfactory clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with a hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level, and un-clustered instance-level supervisory signals for learning feature representations. Unlike the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, target-domain clusters, and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and learning targets, which proves to be the key to our outstanding performance. Our method outperforms state-of-the-art approaches on multiple domain adaptation tasks for object re-ID and even boosts performance on the source domain without any extra annotations. Our generalized version for unsupervised object re-ID surpasses state-of-the-art algorithms by a considerable 16.7% and 7.9% on the Market-1501 and MSMT17 benchmarks.
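To make the unified supervision concrete, here is a minimal numpy sketch of a contrastive loss computed against a hybrid memory that concatenates source-class centroids, target-cluster centroids, and un-clustered instance features. Function and variable names, and the temperature value, are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def unified_contrastive_loss(query, class_centroids, cluster_centroids,
                             instance_feats, pos_index, temperature=0.05):
    """Sketch of a unified contrastive loss over a hybrid memory.

    All features are assumed L2-normalized. The memory concatenates
    source-domain class centroids, target-domain cluster centroids, and
    un-clustered target instance features; `pos_index` points at the
    query's positive prototype within that concatenated memory.
    """
    memory = np.concatenate([class_centroids, cluster_centroids, instance_feats])
    logits = memory @ query / temperature      # similarity to every prototype
    logits -= logits.max()                     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum())  # log-softmax
    return -log_prob[pos_index]                # NLL of the positive prototype
```

A query feature that matches its assigned cluster centroid yields a near-zero loss, while a query far from its positive prototype is penalized; in the paper's framework the memory entries themselves are updated dynamically as clustering is refined.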
Review for NeurIPS paper: Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
Weaknesses: - The main idea of this method is unified contrastive learning. However, the strategy of jointly learning from the source and target domains is not new, although different methods implement it with different losses (e.g., in [57,58]). It is also natural that performance on the source domain is higher with joint learning of both domains than with fine-tuning on target data only. Besides, the non-parametric form of contrastive learning is widely used in general unsupervised visual representation learning methods (such as MoCo and SimCLR) and is not new in this method. It may suit the current UDA benchmarks, but the generality of a method built on such an assumption is limited in real-world practical application scenarios where no prior knowledge is available about the target data. Existing methods that optimize the source and target domains separately thus show more of an advantage in this respect.
Meta-review for NeurIPS paper: Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
Three of the four reviewers originally recommended marginal accept or accept (7, 6, 6), as they felt the paper provided a good empirical contribution to the field of adaptive re-identification and its results were strong. R9 was more negative and had concerns about the experiments. One reviewer pointed out that the DukeMTMC dataset, used extensively in the paper, was taken down 12 months ago and its use should be discontinued. Because of the ethical concerns around this, the paper underwent additional review by the ethics panel, which recommended that the dataset should NOT be used in an accepted NeurIPS paper. Some excerpts from the ethics reviewers are below: -- "... the dataset collection involved non-consensual video surveillance of students on Duke University campus. It is unlikely that all students even knew they were being recorded, and their relative lack of power with respect to the institution surveilling them also raises concerns about the ability to meaningfully object to the surveillance."